84 research outputs found
Adaptive Algorithms for Intelligent Acoustic Interfaces
Modern speech communications are evolving in a new direction, one that involves users in a more perceptive way: the immersive experience, which may be considered the “last-mile” problem of telecommunications.
One of the main features of immersive communications is distant talking, i.e., hands-free (in the broad sense) speech communication without body-worn or tethered microphones, which takes place in a multisource environment where interfering signals may degrade the communication quality and the intelligibility of the desired speech source. In order to preserve speech quality, intelligent acoustic interfaces may be used. An intelligent acoustic interface may comprise multiple microphones and loudspeakers, and its peculiarity is that it models the acoustic channel in order to adapt to user requirements and environmental conditions. This is why intelligent acoustic interfaces are based on adaptive filtering algorithms.
Acoustic path modelling entails a set of problems which have to be taken into account when designing an adaptive filtering algorithm. Such problems essentially stem from either a linear or a nonlinear process and can be tackled by linear or nonlinear adaptive algorithms, respectively.
In this work we consider such modelling problems and propose novel, effective adaptive algorithms that make acoustic interfaces robust against interfering signals, thus preserving the perceived quality of the desired speech signals.
As regards linear adaptive algorithms, a class of adaptive filters based on the sparse nature of the acoustic impulse response has recently been proposed. We adopt this class of adaptive filters, named proportionate adaptive filters, and derive a general framework from which any linear adaptive algorithm can be obtained. Using this framework, we also propose several efficient proportionate adaptive algorithms, expressly designed to tackle problems of a linear nature.
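For concreteness, here is a minimal sketch of a classical proportionate update in the style of PNLMS, the ancestor of this family; the specific algorithms proposed in the work differ, and the step size mu, proportionality factor rho, and regularization delta are illustrative choices of ours:

```python
import numpy as np

def pnlms_update(w, x_buf, d, mu=0.5, rho=0.01, delta=1e-4):
    """One PNLMS-style step: each coefficient gets a gain ~ |w[k]|,
    so large (active) taps of a sparse impulse response adapt faster."""
    e = d - np.dot(w, x_buf)                           # a priori error
    gamma = np.maximum(rho * np.max(np.abs(w) + 1e-8), np.abs(w))
    g = gamma / np.sum(gamma)                          # proportionate gains
    w = w + mu * g * x_buf * e / (np.dot(g * x_buf, x_buf) + delta)
    return w, e
```

Setting the gain vector g to a uniform vector recovers a standard NLMS step, which hints at how a general framework can embed non-proportionate algorithms as special cases.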
On the other hand, in order to address problems deriving from a nonlinear process, we propose a novel filtering model that performs nonlinear transformations by means of functional links. Using this nonlinear model, we propose functional link adaptive filters, which provide an efficient solution to the modelling of a nonlinear acoustic channel.
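As a rough illustration of the functional-link idea, the sketch below expands each input sample with a trigonometric basis and then runs a plain NLMS step on the expanded buffer. This is a common functional-link choice; the actual FLAF structures in the work are more elaborate, and all names here are ours:

```python
import numpy as np

def trig_flink(x, order=2):
    """Trigonometric functional-link expansion of a single sample."""
    feats = [x]
    for p in range(1, order + 1):
        feats += [np.sin(p * np.pi * x), np.cos(p * np.pi * x)]
    return np.array(feats)

def flaf_update(w, x_win, d, mu=0.5, delta=1e-4, order=2):
    """NLMS step on the functionally expanded input window."""
    z = np.concatenate([trig_flink(x, order) for x in x_win])
    e = d - np.dot(w, z)
    return w + mu * e * z / (np.dot(z, z) + delta), e
```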
Finally, we introduce robust filtering architectures based on adaptive combinations of filters, which allow acoustic interfaces to adapt more effectively to environmental conditions, thus providing a powerful means for immersive speech communications.
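A standard way to realize such adaptive combinations (e.g., of a linear and a nonlinear filter) is the sigmoid-parameterized convex mixing scheme sketched below; this is a generic textbook construction, not necessarily the exact architecture of the work:

```python
import numpy as np

def combine_filters(y1, y2, d, a, mu_a=0.5):
    """Adaptive convex combination of two filter outputs.

    The mixing weight lam = sigmoid(a) is adapted by stochastic
    gradient descent on the squared error of the combined output."""
    lam = 1.0 / (1.0 + np.exp(-a))
    y = lam * y1 + (1.0 - lam) * y2
    e = d - y
    a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)    # d(e^2)/da descent
    return y, e, a
```

Such combinations are known to perform, at steady state, close to the better of their two components, which is what makes them robust to changing environmental conditions.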
Learning from distributed data sources using random vector functional-link networks
One of the main characteristics of many real-world big data scenarios is their distributed nature. In a machine learning context, distributed data, together with the requirements of preserving privacy and scaling up to large networks, brings the challenge of designing fully decentralized training protocols. In this paper, we explore the problem of distributed learning when the features of every pattern are spread across multiple agents (as happens, for example, in a distributed database scenario). We propose an algorithm for a particular class of neural networks, known as Random Vector Functional-Link (RVFL) networks, based on the Alternating Direction Method of Multipliers (ADMM) optimization algorithm. The proposed algorithm makes it possible to learn an RVFL network from multiple distributed data sources while restricting communication to the single operation of computing a distributed average. Our experimental simulations show that the algorithm achieves a generalization accuracy comparable to a fully centralized solution while being extremely efficient.
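For reference, a plain (centralized) RVFL network is simple enough to sketch in a few lines: the hidden expansion is random and fixed, and only the output weights are trained, here by ridge regression. In the decentralized protocol described above, this single least-squares solve is replaced by ADMM iterations whose only communication primitive is a distributed average. The sketch and its names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_train(X, y, n_hidden=100, lam=1e-2):
    """Centralized RVFL: random hidden layer + ridge on output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))   # random, never trained
    b = rng.standard_normal(n_hidden)
    H = np.hstack([np.tanh(X @ W + b), X])            # hidden + direct links
    beta = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)
    return W, b, beta
```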
Widely Linear Kernels for Complex-Valued Kernel Activation Functions
Complex-valued neural networks (CVNNs) have been shown to be powerful
nonlinear approximators when the input data can be properly modeled in the
complex domain. One of the major challenges in scaling up CVNNs in practice is
the design of complex activation functions. Recently, we proposed a novel
framework for learning these activation functions neuron-wise in a
data-dependent fashion, based on a cheap one-dimensional kernel expansion and
the idea of kernel activation functions (KAFs). In this paper we argue that,
despite its flexibility, this framework is still limited in the class of
functions that can be modeled in the complex domain. We leverage the idea of
widely linear complex kernels to extend the formulation, allowing for a richer
expressiveness without an increase in the number of adaptable parameters. We
test the resulting model on a set of complex-valued image classification
benchmarks. Experimental results show that the resulting CVNNs can achieve
higher accuracy while at the same time converging faster.
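To fix ideas, a KAF models each activation function as a one-dimensional kernel expansion over a fixed dictionary; in the complex case, a widely linear extension also lets the function depend on the conjugate of its argument. The sketch below is only illustrative: it uses a complex Gaussian kernel and a naive separate conjugate branch, and does not reproduce the paper's actual widely linear kernel formulation.

```python
import numpy as np

def complex_kaf(z, d, alpha, beta, gamma=1.0):
    """Illustrative complex kernel activation function.

    z           : complex pre-activations, any shape
    d           : fixed complex dictionary, shape (D,)
    alpha, beta : trainable complex coefficients, shape (D,)
    """
    k  = np.exp(-gamma * np.abs(z[..., None] - d) ** 2)           # k(z, d_i)
    kc = np.exp(-gamma * np.abs(np.conj(z)[..., None] - d) ** 2)  # k(z*, d_i)
    return k @ alpha + kc @ beta   # widely linear: uses both z and conj(z)
```

Note that, unlike this naive sketch (which doubles the coefficients per neuron), the paper achieves the richer expressiveness without increasing the number of adaptable parameters.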
Quaternion generative adversarial networks
The latest Generative Adversarial Networks (GANs) achieve outstanding results through large-scale training, employing models composed of millions of parameters that require extensive computational capabilities. Building such huge models undermines their replicability and increases training instability. Moreover, multi-channel data, such as images or audio, are usually processed by real-valued convolutional networks that flatten and concatenate the input, often losing intra-channel spatial relations. To address these issues related to complexity and information loss, we propose a family of quaternion-valued generative adversarial networks (QGANs). QGANs exploit the properties of quaternion algebra, e.g., the Hamilton product, which allows channels to be processed as a single entity and internal latent relations to be captured, while reducing the overall number of parameters by a factor of 4. We show how to design QGANs and how to extend the proposed approach to advanced models. We compare the proposed QGANs with real-valued counterparts on several image generation benchmarks. Results show that QGANs obtain better FID scores than real-valued GANs and generate visually pleasing images. Furthermore, QGANs save up to 75% of the training parameters. We believe these results may pave the way to novel, more accessible GANs capable of improving performance while saving computational resources.
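The parameter saving comes from the weight sharing that the Hamilton product imposes across the four quaternion components. A minimal dense-layer version is sketched below (the QGANs use quaternion convolutions, but the sharing pattern is the same):

```python
import numpy as np

def quaternion_linear(x, Wr, Wi, Wj, Wk):
    """Quaternion fully connected layer via the Hamilton product.

    x  : real vector of shape (4*n,), components stacked as r|i|j|k
    W* : four (m, n) real sub-matrices; a real layer mapping the same
         shapes would need a full (4m, 4n) matrix -> 4x the parameters.
    """
    W = np.block([[Wr, -Wi, -Wj, -Wk],
                  [Wi,  Wr, -Wk,  Wj],
                  [Wj,  Wk,  Wr, -Wi],
                  [Wk, -Wj,  Wi,  Wr]])
    return W @ x
```

Because the four sub-matrices are reused in every block row, the layer treats the four channels as one algebraic entity rather than as independent feature maps.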
PHNNs: Lightweight Neural Networks via Parameterized Hypercomplex Convolutions
Hypercomplex neural networks have been shown to reduce the overall number of parameters while ensuring valuable performance by leveraging the properties of Clifford algebras. Recently, hypercomplex linear layers have been further improved by involving efficient parameterized Kronecker products. In this article, we define the parameterization of hypercomplex convolutional layers and introduce the family of parameterized hypercomplex neural networks (PHNNs), lightweight and efficient large-scale models. Our method grasps the convolution rules and the filter organization directly from data, without requiring a rigidly predefined domain structure. PHNNs can operate in any user-defined or tuned domain, from 1D to nD, regardless of whether the algebra rules are preset. Such malleability allows multidimensional inputs to be processed in their natural domain without appending further dimensions, as is done instead in quaternion neural networks (QNNs) for 3D inputs such as color images. As a result, the proposed family of PHNNs operates with 1/n of the free parameters of its analog in the real domain. We demonstrate the versatility of this approach across multiple domains of application by performing experiments on various image and audio datasets, in which our method outperforms real- and quaternion-valued counterparts.
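The core construction behind parameterized hypercomplex layers is the sum-of-Kronecker-products weight of the PHM formulation, which a convolutional layer inherits for its filter banks. A minimal sketch of the weight assembly (names are ours):

```python
import numpy as np

def phm_weight(A, F):
    """Parameterized hypercomplex weight: W = sum_i kron(A_i, F_i).

    A : (n, n, n) stack of algebra matrices, learned from the data
    F : (n, m // n, k // n) stack of filter sub-matrices
    The resulting (m, k) weight has roughly 1/n of the free parameters
    of a dense real (m, k) layer, with n a user-chosen hyperparameter.
    """
    return sum(np.kron(Ai, Fi) for Ai, Fi in zip(A, F))
```

With n = 4 and A fixed to the quaternion multiplication rules, this recovers the Hamilton-product layer of QNNs; learning A instead is what frees PHNNs from a predefined algebra.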
Hypercomplex Image-to-Image Translation
Image-to-image translation (I2I) aims at transferring the content representation from an input domain to an output one, bouncing along different target domains. Recent I2I generative models, which gain outstanding results in this task, comprise a set of diverse deep networks, each with tens of millions of parameters. Moreover, images are usually three-dimensional, being composed of RGB channels, and common neural models do not take this correlation among dimensions into account, losing beneficial information. In this paper, we propose to leverage hypercomplex algebra properties to define lightweight I2I generative models capable of preserving pre-existing relations among image dimensions, thus exploiting additional input information. On multiple I2I benchmarks, we show how the proposed Quaternion StarGANv2 and parameterized hypercomplex StarGANv2 (PHStarGANv2) reduce the number of parameters and the storage memory required, while ensuring high domain translation performance and good image quality as measured by FID and LPIPS scores. Full code is available at: https://github.com/ispamm/HI2I
Diffusion models for audio semantic communication
Directly sending audio signals from a transmitter to a receiver across a noisy channel may consume considerable bandwidth and be prone to errors when trying to recover the transmitted bits. By contrast, the recent semantic communication approach proposes to send the semantics and then regenerate semantically consistent content at the receiver, without exactly recovering the bitstream. In this paper, we propose a generative audio semantic communication framework that casts the communication problem as an inverse problem, and is therefore robust to different corruptions. Our method transmits lower-dimensional representations of the audio signal and of the associated semantics to the receiver, which generates the corresponding signal with a particular focus on its meaning (i.e., the semantics) thanks to the conditional diffusion model at its core. During the generation process, the diffusion model restores the received information from multiple degradations at the same time, including corruption noise and missing parts caused by transmission over the noisy channel. We show that our framework outperforms competitors in a real-world scenario and under different channel conditions. Visit the project page to listen to samples and access the code: https://ispamm.github.io/diffusion-audio-semantic-communication/
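At the heart of such a receiver sits a conditional denoising loop. For orientation, one generic DDPM reverse step is sketched below, where the predicted noise comes from a network conditioned on the transmitted semantic latents; this is the textbook update, not the paper's exact sampler:

```python
import numpy as np

def ddpm_reverse_step(x_t, t, eps_hat, alphas, alphas_bar, rng):
    """One DDPM reverse step x_t -> x_{t-1}.

    eps_hat : eps_theta(x_t, t, semantics), the conditional denoiser's
              noise estimate given the received semantic representation.
    """
    a_t, ab_t = alphas[t], alphas_bar[t]
    mean = (x_t - (1 - a_t) / np.sqrt(1 - ab_t) * eps_hat) / np.sqrt(a_t)
    if t == 0:
        return mean
    return mean + np.sqrt(1 - a_t) * rng.standard_normal(x_t.shape)
```

Because the same denoiser iteratively removes whatever corruption the channel introduced, the receiver does not need a separate model per degradation type.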
Learning Speech Emotion Representations in the Quaternion Domain
The modeling of human emotion expression in speech signals is an important, yet challenging, task. The high resource demand of speech emotion recognition models, combined with the general scarcity of emotion-labelled data, are obstacles to the development and application of effective solutions in this field. In this paper, we present an approach to jointly circumvent these difficulties. Our method, named RH-emo, is a novel semi-supervised architecture aimed at extracting quaternion embeddings from real-valued monaural spectrograms, enabling the use of quaternion-valued networks for speech emotion recognition tasks. RH-emo is a hybrid real/quaternion autoencoder network that consists of a real-valued encoder in parallel with a real-valued emotion classifier and a quaternion-valued decoder. On the one hand, the classifier makes it possible to optimize each latent axis of the embeddings for the classification of a specific emotion-related characteristic: valence, arousal, dominance, and overall emotion. On the other hand, the quaternion reconstruction enables the latent dimensions to develop the intra-channel correlations required for an effective representation as a quaternion entity. We test our approach on speech emotion recognition tasks using four popular datasets, IEMOCAP, RAVDESS, EmoDB, and TESS, comparing the performance of three well-established real-valued CNN architectures (AlexNet, ResNet-50, VGG) and their quaternion-valued equivalents fed with the embeddings created by RH-emo. We obtain a consistent improvement in test accuracy for all datasets, while drastically reducing the models' resource demand. Moreover, additional experiments and ablation studies confirm the effectiveness of our approach. The RH-emo repository is available at: https://github.com/ispamm/rhemo
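The training signal described above combines one reconstruction term with four classification heads. A compact, purely illustrative way to write such a hybrid objective (head names follow the abstract; the weighting beta is a hypothetical hyperparameter):

```python
import torch
import torch.nn.functional as F

def rh_emo_style_loss(x, x_rec, logits, targets, beta=1.0):
    """Illustrative hybrid objective: quaternion-decoder reconstruction
    plus one classifier head per emotion-related characteristic."""
    loss = beta * F.mse_loss(x_rec, x)                 # reconstruction
    for head in ("valence", "arousal", "dominance", "emotion"):
        loss = loss + F.cross_entropy(logits[head], targets[head])
    return loss
```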
Enhancing Semantic Communication with Deep Generative Models -- An ICASSP Special Session Overview
Semantic communication is poised to play a pivotal role in shaping the landscape of future AI-driven communication systems. Its core challenge, extracting semantic information from the original complex content and regenerating semantically consistent data at the receiver, possibly while being robust to channel corruptions, can be addressed with deep generative models. This ICASSP special session overview paper presents the semantic communication challenges from the machine learning perspective and unveils how deep generative models can significantly enhance semantic communication frameworks in dealing with real-world complex data, extracting and exploiting semantic information, and being robust to channel corruptions. Alongside establishing this emerging field, the paper charts novel research pathways for the next generative semantic communication frameworks.
- …